540 research outputs found

    Estimating Parameters of Partial Differential Equations with Gradient Matching

    Parameter inference in partial differential equations (PDEs) is a problem of broad interest. Conventional methods suffer from severe computational costs because they require solving the PDEs repeatedly by numerical integration. Gradient matching has been proposed to reduce this computational burden and consists of two steps. First, the data are interpolated with a smoothing method. Then, the partial derivatives of the interpolants are computed, and the parameters are optimized to minimize the distance (measured by a loss function) between the partial derivatives of the interpolants and those prescribed by the PDE system. In this article, we first study the parameter inference accuracy of gradient matching on two simple PDE models. We then use gradient matching to infer the parameters of PDE models describing cell movement and to select the most appropriate model.
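    To make the two-step procedure concrete, here is a minimal sketch of gradient matching for a single-parameter heat equation u_t = θ·u_xx; the synthetic grid, noise level, and spline smoother are illustrative assumptions, not the cell-movement PDE models studied in the article.

```python
# Gradient matching sketch for u_t = theta * u_xx. The model, grid,
# and noise level are illustrative assumptions, not the paper's PDEs.
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.optimize import minimize_scalar

# Step 0: synthetic noisy observations of u(x, t) on a grid.
theta_true = 0.5
x = np.linspace(0, np.pi, 50)
t = np.linspace(0, 1, 40)
X, T = np.meshgrid(x, t, indexing="ij")
u_clean = np.exp(-theta_true * T) * np.sin(X)  # exact solution of u_t = theta * u_xx
u_obs = u_clean + 0.01 * np.random.default_rng(0).normal(size=u_clean.shape)

# Step 1: interpolate the data with a smoothing spline.
spline = RectBivariateSpline(x, t, u_obs, s=len(x) * len(t) * 0.01**2)

# Step 2: differentiate the interpolant and minimize the squared
# mismatch between u_t and theta * u_xx over the grid.
u_t = spline(x, t, dy=1)   # first partial derivative w.r.t. t
u_xx = spline(x, t, dx=2)  # second partial derivative w.r.t. x

def loss(theta):
    return np.mean((u_t - theta * u_xx) ** 2)

theta_hat = minimize_scalar(loss, bounds=(0.0, 2.0), method="bounded").x
print(f"true theta = {theta_true}, estimated theta = {theta_hat:.3f}")
```

    Because the loss is quadratic in θ here, a closed-form solution also exists; the bounded scalar minimization simply mirrors the general optimization step of the method.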

    Beyond Semantics: Learning a Behavior Augmented Relevance Model with Self-supervised Learning

    Relevance modeling aims to locate desirable items for corresponding queries, which is crucial for search engines to ensure a good user experience. Although most conventional approaches address this problem by assessing the semantic similarity between the query and the item, pure semantic matching is not everything.

    A Unified Search and Recommendation Foundation Model for Cold-Start Scenario

    In modern commercial search engines and recommendation systems, data from multiple domains are available to jointly train a multi-domain model. Traditional methods train multi-domain models in a multi-task setting, with shared parameters that learn the similarity of the tasks and task-specific parameters that learn the divergence in features, labels, and sample distributions of individual tasks. With the development of large language models, LLMs can extract global domain-invariant text features that serve both search and recommendation tasks. We propose a novel framework called S&R Multi-Domain Foundation, which uses an LLM to extract domain-invariant features and Aspect Gating Fusion to merge the ID feature, domain-invariant text features, and task-specific heterogeneous sparse features into representations of the query and item. Additionally, samples from multiple search and recommendation scenarios are trained jointly with a Domain Adaptive Multi-Task module to obtain the multi-domain foundation model. We apply the S&R Multi-Domain Foundation model to cold-start scenarios in a pretrain-finetune manner, achieving better performance than other SOTA transfer-learning methods. The S&R Multi-Domain Foundation model has been successfully deployed in the Alipay mobile application's online services, such as content query recommendation and service card recommendation.
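    As an illustration of the fusion step, the following is a hedged sketch of an aspect-gating layer that weights an ID embedding, an LLM-derived text embedding, and a sparse-feature embedding before combining them; the abstract does not specify the exact design of Aspect Gating Fusion, so the gate architecture and dimensions below are assumptions.

```python
# A hedged sketch of aspect-gating fusion: a small gate network
# weights the ID embedding, LLM text embedding, and sparse-feature
# embedding before summing them. This is my own interpretation, not
# the paper's specified architecture.
import torch
import torch.nn as nn

class AspectGatingFusion(nn.Module):
    def __init__(self, dim: int, n_aspects: int = 3):
        super().__init__()
        # The gate sees the concatenated aspects and emits one weight each.
        self.gate = nn.Sequential(
            nn.Linear(n_aspects * dim, n_aspects),
            nn.Softmax(dim=-1),
        )

    def forward(self, id_emb, text_emb, sparse_emb):
        # All inputs: (batch, dim), already projected to a shared size.
        aspects = torch.stack([id_emb, text_emb, sparse_emb], dim=1)  # (B, 3, D)
        weights = self.gate(aspects.flatten(1))                       # (B, 3)
        return (weights.unsqueeze(-1) * aspects).sum(dim=1)          # (B, D)

# Usage: fuse three 64-dim aspect embeddings for a batch of queries.
fusion = AspectGatingFusion(dim=64)
out = fusion(torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64))
print(out.shape)  # torch.Size([8, 64])
```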

    Protein kinase CK2α is overexpressed in colorectal cancer and modulates cell proliferation and invasion via regulating EMT-related genes

    Background: Protein kinase CK2 is a highly conserved, ubiquitous protein serine/threonine kinase that phosphorylates many substrates and plays a global role in numerous biological and pathological processes. Overexpression of the protein kinase CK2α subunit (CK2α) has been associated with the malignant transformation of several tissues, but far less attention has been paid to the role of CK2α in colorectal cancer (CRC). The aim of this study is to investigate the function and regulatory mechanism of CK2α in CRC development. Methods: Expression levels of CK2α were analyzed in 144 patients (104 with CRC and 40 with colorectal adenoma) by immunohistochemistry. Proliferation, senescence, motility, and invasion assays, as well as immunofluorescence staining and western blots, were performed to assess the effect of CK2α in CRC. Results: The immunohistochemical expression of nuclear CK2α was stronger in tumor tissues than in adenomas and normal colorectal tissues. Suppression of CK2α by small-interfering RNA or the CK2α activity inhibitor emodin inhibited proliferation of CRC cells, caused G0/G1 phase arrest, induced cell senescence, elevated the expression of p53/p21, and decreased the expression of c-myc. We also found that knockdown of CK2α suppressed cell motility and invasion. Significantly, CK2α inhibition resulted in β-catenin transactivation, decreased the expression levels of vimentin and the transcription factors snail1 and smad2/3, and increased the expression of E-cadherin, suggesting that CK2α regulates the epithelial-mesenchymal transition (EMT) process in cancer cells. Conclusions: Our results indicate that CK2α plays an essential role in the development of CRC and that inhibition of CK2α may serve as a promising therapeutic strategy for human CRC.

    tSF: Transformer-based Semantic Filter for Few-Shot Learning

    Few-Shot Learning (FSL) alleviates the data-shortage challenge by embedding discriminative, target-aware features from plentiful seen (base) and few unseen (novel) labeled samples. Most feature-embedding modules in recent FSL methods are specially designed for their corresponding learning tasks (e.g., classification, segmentation, and object detection), which limits the reusability of the embedded features. To this end, we propose a light and universal module named transformer-based Semantic Filter (tSF), which can be applied to different FSL tasks. The proposed tSF redesigns the inputs of a transformer-based structure with a semantic filter, which not only transfers knowledge from the whole base set to the novel set but also filters semantic features for the target category. Furthermore, the parameter count of tSF is half that of a standard transformer block (less than 1M). In experiments, tSF boosts performance across different classic few-shot learning tasks (about a 2% improvement) and, in particular, outperforms the state of the art on multiple benchmark datasets for few-shot classification.
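    To convey the flavor of the design, here is a minimal sketch of a semantic-filter module built from learned filter tokens and a single cross-attention layer (keeping only the attention weights is roughly half of a standard transformer block, which also carries a two-layer feed-forward network); the token count and exact wiring are assumptions, not the paper's specification.

```python
# A minimal sketch of a transformer-based semantic filter: a bank of
# learned "filter" tokens replaces one side of the attention, so the
# module keeps only attention parameters (no FFN). The wiring is an
# assumption, not the paper's exact design.
import torch
import torch.nn as nn

class SemanticFilter(nn.Module):
    def __init__(self, dim: int, n_filters: int = 16, n_heads: int = 4):
        super().__init__()
        # Learned semantic prototypes shared across tasks/episodes.
        self.filters = nn.Parameter(torch.randn(n_filters, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats):
        # feats: (batch, tokens, dim) flattened feature map.
        kv = self.filters.unsqueeze(0).expand(feats.size(0), -1, -1)
        filtered, _ = self.attn(query=feats, key=kv, value=kv)
        return self.norm(feats + filtered)  # residual, target-aware features

feats = torch.randn(8, 25, 64)  # e.g., a 5x5 feature map per image
print(SemanticFilter(dim=64)(feats).shape)  # torch.Size([8, 25, 64])
```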

    Harnessing the Power of David against Goliath: Exploring Instruction Data Generation without Using Closed-Source Models

    Instruction tuning is instrumental in enabling Large Language Models (LLMs) to follow user instructions to complete various open-domain tasks. The success of instruction tuning depends on the availability of high-quality instruction data. Owing to the exorbitant cost and substandard quality of human annotation, recent works have explored using powerful closed-source models to generate instruction data automatically. However, these methods carry potential risks arising from the usage terms of powerful closed-source models, which strictly forbid using their outputs to develop machine learning models. To deal with this problem, we explore alternative approaches to generating high-quality instruction data that do not rely on closed-source models. Our exploration includes an investigation of various existing instruction generation methods, culminating in the integration of the most efficient variant with two novel strategies that further enhance quality. Evaluation results from two benchmarks and the GPT-4 model demonstrate the effectiveness of our generated instruction data, which can outperform Alpaca, a method reliant on closed-source models. We hope that more progress can be achieved in generating high-quality instruction data without using closed-source models.
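    For readers unfamiliar with this line of work, below is a generic, self-instruct-style sketch of instruction generation with an open-source model plus a lexical near-duplicate filter; the model name, prompt format, and similarity threshold are illustrative assumptions and do not reproduce the paper's pipeline or its two quality-enhancing strategies.

```python
# A generic sketch of closed-source-free instruction generation: seed
# instructions prompt an open-source model to draft new ones, and
# near-duplicates are filtered out. Model, prompt, and threshold are
# illustrative assumptions, not the paper's exact method.
import random
from difflib import SequenceMatcher
from transformers import pipeline

# Any open-source instruct model could stand in here (assumption).
generator = pipeline("text-generation", model="tiiuae/falcon-7b-instruct")

seeds = [
    "Explain the difference between a list and a tuple in Python.",
    "Summarize the plot of a novel in three sentences.",
    "Write a polite email declining a meeting invitation.",
]

def too_similar(candidate, pool, threshold=0.7):
    # Cheap lexical near-duplicate check; real pipelines often use ROUGE.
    return any(SequenceMatcher(None, candidate, s).ratio() > threshold for s in pool)

instructions = list(seeds)
while len(instructions) < 10:
    examples = "\n".join(f"- {s}" for s in random.sample(instructions, 3))
    prompt = f"Here are some task instructions:\n{examples}\n- "
    text = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]
    candidate = text[len(prompt):].split("\n")[0].strip()
    if candidate and not too_similar(candidate, instructions):
        instructions.append(candidate)

print(instructions)
```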